Boosting the Transferability of Adversarial Examples with Translation Transformation
Authors
Abstract
Although adversarial examples achieve an impressive attack success rate in the white-box setting, they tend to show poor transferability in black-box attacks. Data augmentation is considered an effective means of enhancing transferability. To this end, we propose a new method based on translation transformation to generate adversarial examples that transfer better to advanced defense models. By applying translation transformations to the original image, input diversity is improved and the transferability of the generated adversarial examples is further enhanced. The method can also be combined with gradient-based attacks to achieve an even higher success rate. Experiments on the ImageNet dataset show that the proposed method outperforms gradient-based methods such as MI-FGSM in black-box attacks, while maintaining a high white-box attack success rate. We hope our work can serve as a benchmark for assessing both the robustness of networks to adversaries and the effectiveness of different defenses.
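The abstract does not spell out the algorithm, but the core idea (averaging input gradients over small translations of the image before taking an iterative sign-gradient step, as in MI-FGSM-style attacks) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the linear "classifier" `w`, the cyclic `np.roll` translation, the shift set, and all function names (`loss_grad`, `translated_grad`, `ti_fgsm`) are assumptions made for the sketch; a real attack would use a deep network and autodiff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier": score = sum(w * x); larger score => class 1.
# (Stand-in for a real network; in practice gradients come from autodiff.)
w = rng.normal(size=(8, 8))

def loss_grad(x, y):
    """Gradient of the logistic loss w.r.t. the input image x, for label y in {0, 1}."""
    score = np.sum(w * x)
    p = 1.0 / (1.0 + np.exp(-score))
    return (p - y) * w  # d(loss)/dx for this linear model

def translated_grad(x, y, shifts=(-1, 0, 1)):
    """Average the input gradient over small translations of x.

    This realizes the translation-transformation idea; the exact shift set
    and cyclic padding are assumptions of this sketch."""
    g = np.zeros_like(x)
    for dx in shifts:
        for dy in shifts:
            xt = np.roll(np.roll(x, dx, axis=0), dy, axis=1)
            gt = loss_grad(xt, y)
            # Shift the gradient back so it aligns with the original image.
            g += np.roll(np.roll(gt, -dx, axis=0), -dy, axis=1)
    return g / (len(shifts) ** 2)

def ti_fgsm(x, y, eps=0.1, steps=10):
    """Iterative sign-gradient ascent on the loss, using translation-averaged gradients."""
    alpha = eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(translated_grad(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay inside the L-inf eps-ball
    return x_adv

x = rng.normal(size=(8, 8))
adv = ti_fgsm(x, y=0)
print(float(np.max(np.abs(adv - x))))  # bounded by eps = 0.1
```

Averaging gradients over translated copies smooths the attack direction so it depends less on the exact pixel alignment seen by the surrogate model, which is the intuition behind the improved black-box transferability claimed in the abstract.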
Similar Resources
Improving Transferability of Adversarial Examples with Input Diversity
Though convolutional neural networks have achieved stateof-the-art performance on various vision tasks, they are extremely vulnerable to adversarial examples, which are obtained by adding humanimperceptible perturbations to the original images. Adversarial examples can thus be used as an useful tool to evaluate and select the most robust models in safety-critical applications. However, most of ...
Understanding and Enhancing the Transferability of Adversarial Examples
State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs. Moreover, the perturbations can transfer across models: adversarial examples generated for a specific model will often mislead other unseen models. Consequently the adversary can leverage it to attack deployed systems without any ...
Blocking Transferability of Adversarial Examples in Black-Box Learning Systems
Advances in Machine Learning (ML) have led to its adoption as an integral component in many applications, including banking, medical diagnosis, and driverless cars. To further broaden the use of ML models, cloud-based services offered by Microsoft, Amazon, Google, and others have developed ML-as-a-service tools as black-box systems. However, ML classifiers are vulnerable to adversarial examples...
Adversarial Transformation Networks: Learning to Generate Adversarial Examples
Multiple different approaches of generating adversarial examples have been proposed to attack deep neural networks. These approaches involve either directly computing gradients with respect to the image pixels, or directly solving an optimization on the image pixels. In this work, we present a fundamentally new method for generating adversarial examples that is fast to execute and provides exce...
Generating Adversarial Examples with Adversarial Networks
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires mor...
Journal
Journal title: Journal of Physics
Year: 2021
ISSN: 0022-3700, 1747-3721, 0368-3508, 1747-3713
DOI: https://doi.org/10.1088/1742-6596/1955/1/012063